
Multifidelity Neural Operator for PDE Problems
Operator learning frameworks have recently emerged as an alternative approach to learning PDE solution operators from data. Training an operator learning model on available data allows researchers to reduce the computational burden of traditional numerical methods, enabling faster inference of PDE solutions under varying conditions. Frameworks such as deep operator networks (DeepONet) and neural operators (NO) are particularly effective because they learn nonlinear mappings between two infinite-dimensional function spaces, which allows them to produce accurate solution approximations for PDE problems. While both DeepONet and NO are capable of addressing PDE problems, this research focuses exclusively on the NO framework. Training operator learning models requires a considerable amount of high-fidelity data, which can still be expensive to obtain. To alleviate this issue, we incorporate multi-fidelity learning: a large amount of inexpensive low-fidelity data that is well correlated with the expensive high-fidelity data is used alongside a small amount of high-fidelity data. This study presents a comparative analysis of several multi-fidelity architectures, including the intermediate, residual, and multi-step models, across several PDE test cases to assess the effectiveness of multi-fidelity learning applied to the neural operator framework. Our primary goal is to identify an architecture capable of capturing the nonlinear relationships between lower- and higher-fidelity data. We implemented this concept in our preliminary test cases, specifically the 1D stochastic Poisson equation and 2D Darcy flow on a triangular domain. The initial findings suggest that multi-fidelity neural operator architectures outperform purely high-fidelity models when data are limited.
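The residual-model idea mentioned above can be illustrated in a few lines. The sketch below is not the authors' neural operator implementation; it is a minimal toy analogue in which the "models" are simple polynomial fits, the test functions `f_hi` and `f_lo` are invented stand-ins for expensive and cheap solvers, and the correction network is replaced by a least-squares fit to the discrepancy. The point it demonstrates is the same: a cheap, well-correlated low-fidelity model plus a correction learned from only a handful of high-fidelity samples can be far more accurate than either data source alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1D problem (invented for illustration): an expensive
# "high-fidelity" solution and a cheap, well-correlated low-fidelity one.
def f_hi(x):
    return np.sin(2 * np.pi * x) + 0.2 * x**2

def f_lo(x):
    return np.sin(2 * np.pi * x)  # cheap model; misses the quadratic trend

# Abundant low-fidelity samples, but only a handful of high-fidelity ones.
x_hi = rng.uniform(0.0, 1.0, 8)

# Residual model: learn the discrepancy f_hi - f_lo from the scarce
# high-fidelity data (here a quadratic fit stands in for a residual network).
residual = f_hi(x_hi) - f_lo(x_hi)
coeffs = np.polyfit(x_hi, residual, deg=2)

def f_mf(x):
    """Multi-fidelity prediction: low-fidelity output + learned correction."""
    return f_lo(x) + np.polyval(coeffs, x)

# Compare worst-case errors of the low-fidelity and corrected predictions.
x_test = np.linspace(0.0, 1.0, 50)
err_lo = np.max(np.abs(f_lo(x_test) - f_hi(x_test)))
err_mf = np.max(np.abs(f_mf(x_test) - f_hi(x_test)))
print(f"low-fidelity error: {err_lo:.3f}, multi-fidelity error: {err_mf:.2e}")
```

In the full neural operator setting, the polynomial fit is replaced by a network trained on the low-to-high-fidelity residual, but the structure of the prediction is the same sum of a cheap baseline and a learned correction.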